Use of litigation to hold tech platforms accountable back under spotlight

Alex Heshmaty Wednesday 30 August 2023

The US Supreme Court recently ruled on two high-profile cases brought by family members of terrorist attack victims, who alleged that social media algorithms stoked violence. Gonzalez v Google dealt with the issue of whether targeted recommendations by YouTube’s algorithms would fall outside the protection against liability granted by section 230 of the US Communications Decency Act, which essentially prevents big tech platforms from being held liable for the content shared by users.

The US Supreme Court ultimately didn’t answer this question, instead referring to another ruling made on the same day, in Twitter v Taamneh, which dealt with similar allegations of liability for aiding terrorists. In Twitter, the Supreme Court overturned a lower court judgment, holding that the platform’s algorithms and its processes for removing content did not constitute ‘aiding and abetting’ under US anti-terrorism law. The Court held that most or all of the plaintiffs’ complaint in Gonzalez would fail, given the ruling in Twitter.

The cases have placed the spotlight once more on the use of litigation in seeking to hold big tech platforms accountable, given their influence over society, culture and the economy. The most famous name in such litigation is Max Schrems, whose data protection complaint against Facebook Ireland resulted in the ‘Safe Harbor’ data transfer framework being deemed invalid by the Court of Justice of the European Union (CJEU) in 2015.

Joanne Frears, IP & Technology Leader at the UK’s Lionshead Law, says that ‘the argument that law should do nothing because it is always “playing catch up” is trite. Judges are capable of making good and sound decisions when facts are put in front of them, as we have seen in Schrems, Gonzalez and Twitter. Politicians should be able to make similarly grounded, ethical and socially competent decisions when considering how to regulate artificial intelligence [AI] and Web 3.0’.

Without effective multinational agreements and meaningful enforcement mechanisms, platforms will simply treat the risk of fines as a cost of doing business

Adam Rose
Chair, IBA Data Protection Governance and Privacy Subcommittee

Regulators have shown they have teeth, too. In May, the Irish Data Protection Commission fined Meta €1.2bn – a record penalty under the EU General Data Protection Regulation (GDPR) – for infringing Article 46(1) GDPR when it continued to transfer personal data from the EU/European Economic Area to the US following the delivery of the CJEU judgment in the Schrems II case. Meta has announced it’ll appeal the ruling, ‘including the unjustified and unnecessary fine’.

Tech platforms, meanwhile, have engaged in self-regulation as a way of going beyond their terms of use – which themselves contain provisions designed to ensure they’re meeting their legal requirements. There’s debate around how effective self-regulation can be, however.

X – formerly known as Twitter – famously issued a permanent suspension of former US President Donald Trump following the US Capitol riots in early January 2021, having previously banned political adverts in 2019. Both decisions were reversed when the company was purchased by Elon Musk, a move that arguably demonstrates the subjective nature of self-regulation. X didn’t respond to Global Insight’s request for comment.

Meta introduced an Oversight Board in 2020, which seeks to balance freedom of expression with protection from offensive or harmful content. The Board was set up to operate independently of Mark Zuckerberg, with its members including a judge and a former Danish prime minister. The Board has on occasion stepped in to overturn decisions made by Meta: in mid-August, for example, it ruled that an Instagram post in which the user discussed their experience of using ketamine as a treatment for anxiety and depression should be removed, finding that the post contravened Meta’s policies and standards.

Meta didn’t respond to Global Insight’s request for comment.

A concern with self-regulation, however, is that platforms have taken diverging approaches to what they deem safe for users to access when it comes to potentially ‘harmful’ content: one platform might, for example, take a different stance on a post centred on self-harm than another company does.

The element of subjectivity and lack of uniformity means that self-regulation is not a ‘silver bullet’. Paulina Silva, Publications Officer of the IBA Technology Law Committee and a partner at Bitlaw in Santiago, says that ‘a unique challenge in regulating the potential harmful effects of social media, particularly when trying to regulate content without hindering free speech, is that the business model of social media is greatly structured around encouraging controversy’.

Silva adds that given the amount of content that competes for our attention and how attuned we have become to this way of life, it’s likely that the most outrageous information – regardless of its inherent truth or virtue – will get the most clicks. ‘Legislation then faces the very difficult task of regulating a private activity that has notable social and political effects’, she explains.

Jurisdictions are giving regulation a go, however, with the EU leading the way when it comes to internet regulation in general. The GDPR has resulted in some of the largest ever fines imposed against social media platforms, for example. Meanwhile, the EU Digital Services Act, adopted in the autumn of 2022, imposes a range of obligations on any business that offers online services in the EU. Such obligations include greater transparency, for example around how recommendation engines work, as well as more stringent processes for removing illegal content.

The UK has been developing its own set of rules with the aim of enhancing the safety of internet users in the form of the Online Safety Bill, which is currently being debated in the country’s parliament. The Bill seeks to reduce misinformation and harmful content on platforms.

‘The challenge of laws and regulations, and of litigation and private enforcement, is that they are largely localised, in a world where social media platforms are operating across multiple countries’, says Adam Rose, Chair of the IBA Data Protection Governance and Privacy Subcommittee and a partner at Mishcon de Reya in London. ‘Without effective multinational agreements and meaningful enforcement mechanisms, platforms will simply treat the risk of fines as a cost of doing business.’

The dawn of generative AI has precipitated a great deal of debate on the need for a unified international approach to the regulation of technology. Since some of the key players developing AI tools are the same big tech companies already grappling with self-regulation – and given that a significant amount of generative AI output is proliferating on their platforms – it’s vital that effective regulatory frameworks are established without undue delay to meet the latest challenges brought by technology. Litigation can help to test regulatory frameworks and, as with the Schrems case, bring about fundamental change.

Image credit: LabirintStudio/AdobeStock.com